Interpretable entity representations (IERs) are sparse embeddings that are "human-readable" in that dimensions correspond to fine-grained entity types and values are predicted probabilities that a given entity is of the corresponding type. These methods perform well in zero-shot and low supervision settings. Compared to standard dense neural embeddings, such interpretable representations may permit analysis and debugging. However, while fine-tuning sparse, interpretable representations improves accuracy on downstream tasks, it destroys the semantics of the dimensions which were enforced in pre-training. Can we maintain the interpretable semantics afforded by IERs while improving predictive performance on downstream tasks? Toward this end, we propose Intermediate enTity-based Sparse Interpretable Representation Learning (ItsIRL). ItsIRL realizes improved performance over prior IERs on biomedical tasks, while maintaining "interpretability" generally and their ability to support model debugging specifically. The latter is enabled in part by the ability to perform "counterfactual" fine-grained entity type manipulation, which we explore in this work. Finally, we propose a method to construct entity type based class prototypes for revealing global semantic properties of classes learned by our model.
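To make the representation format concrete, here is a minimal sketch of an interpretable entity representation and the kind of "counterfactual" type manipulation described above; the type names and probability values are illustrative placeholders, not outputs of the ItsIRL model.

```python
# Minimal sketch of an interpretable entity representation (IER): the
# embedding is a vector of predicted probabilities, one per fine-grained
# entity type, so each dimension is human-readable. The types and values
# below are illustrative, not from the paper.
import numpy as np

entity_types = ["disease", "protein", "drug", "anatomical_structure"]
ier = np.array([0.02, 0.01, 0.97, 0.03])  # hypothetical probabilities for one entity

# "Counterfactual" type manipulation: edit one dimension, then re-run the
# downstream classifier on the modified representation to see how the
# prediction changes.
counterfactual = ier.copy()
counterfactual[entity_types.index("drug")] = 0.0  # suppress the "drug" type

print(dict(zip(entity_types, counterfactual)))
```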
The best evidence on comparative treatment effectiveness comes from clinical trials, the results of which are reported in unstructured articles. Medical experts must manually extract information from these articles to inform decision-making, which is time-consuming and expensive. Here we consider the end-to-end task of (a) extracting treatments and outcomes from full-text articles describing clinical trials (entity identification) and (b) inferring the reported results for the former (relation extraction). We introduce new data for this task and evaluate models that have recently achieved state-of-the-art results on similar tasks in natural language processing. We then propose a new method, motivated by how trial results are typically presented, that outperforms these purely data-driven baselines. Finally, we evaluate this model with a non-profit seeking to identify existing drugs that might be repurposed for cancer, demonstrating the potential utility of end-to-end evidence extraction systems.
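As a rough illustration of the two-stage structure described above (entity recognition followed by relation extraction), the sketch below wires together two generic Hugging Face pipelines; the checkpoint names, entity labels, and result labels are placeholders, not the paper's models or data.

```python
# Hedged sketch of an end-to-end evidence extraction pipeline: tag
# treatment/outcome spans, then classify the reported result for each
# (treatment, outcome) pair. Checkpoints and label names are assumptions.
from transformers import pipeline

ner = pipeline("token-classification", model="my-org/trial-ico-ner",          # assumed checkpoint
               aggregation_strategy="simple")
rel = pipeline("text-classification", model="my-org/trial-result-inference")  # assumed checkpoint

def extract_evidence(article_text):
    spans = ner(article_text)
    treatments = [s["word"] for s in spans if s["entity_group"] == "TREATMENT"]  # assumed label
    outcomes = [s["word"] for s in spans if s["entity_group"] == "OUTCOME"]      # assumed label
    findings = []
    for t in treatments:
        for o in outcomes:
            # e.g. labels like "increased" / "decreased" / "no difference"
            label = rel(f"{t} [SEP] {o} [SEP] {article_text[:512]}")[0]["label"]
            findings.append((t, o, label))
    return findings
```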
Attention mechanisms have seen wide adoption in neural NLP models. In addition to improving predictive performance, these are often touted as affording transparency: models equipped with attention provide a distribution over attended-to input units, and this is often presented (at least implicitly) as communicating the relative importance of inputs. However, it is unclear what relationship exists between attention weights and model outputs. In this work we perform extensive experiments across a variety of NLP tasks that aim to assess the degree to which attention weights provide meaningful "explanations" for predictions. We find that they largely do not. For example, learned attention weights are frequently uncorrelated with gradient-based measures of feature importance, and one can identify very different attention distributions that nonetheless yield equivalent predictions. Our findings show that standard attention modules do not provide meaningful explanations and should not be treated as though they do. Code to reproduce all experiments is available at https://github.com/successar/AttentionExplanation.
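One of the analyses mentioned above, comparing attention weights against a gradient-based importance measure, can be illustrated with a self-contained toy example; the single-layer "model" and random embeddings below are stand-ins for the paper's task models, and the correlation statistic is one reasonable choice.

```python
# Toy check: compare an attention distribution over tokens with a
# gradient-based importance measure (|grad * embedding| per token)
# using Kendall's tau. This is a minimal stand-in, not the paper's setup.
import torch
import torch.nn as nn
from scipy.stats import kendalltau

torch.manual_seed(0)
seq_len, dim = 12, 16
embeds = torch.randn(seq_len, dim, requires_grad=True)

attn_scorer = nn.Linear(dim, 1)   # produces one attention score per token
classifier = nn.Linear(dim, 1)    # predicts from the attended representation

attn = torch.softmax(attn_scorer(embeds).squeeze(-1), dim=0)  # attention over tokens
context = attn @ embeds                                       # attention-weighted sum
pred = classifier(context)
pred.backward()

grad_importance = (embeds.grad * embeds).abs().sum(-1)        # per-token importance
tau, _ = kendalltau(attn.detach().numpy(), grad_importance.detach().numpy())
print(f"Kendall tau between attention and gradient importance: {tau:.3f}")
```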
The oblique random survival forest (RSF) is an ensemble supervised learning method for right-censored outcomes. Trees in an oblique RSF are grown using linear combinations of predictor variables to create branches, whereas a standard RSF uses a single predictor variable. Oblique RSF ensembles often attain higher prediction accuracy than standard RSF ensembles. However, evaluating all possible linear combinations of predictors induces substantial computational overhead, limiting applications to large-scale datasets. Moreover, few methods have been developed for interpreting oblique RSF ensembles, and they remain harder to interpret than their axis-based counterparts. We introduce a method to increase the computational efficiency of oblique RSFs and a method to estimate the importance of individual predictor variables with an oblique RSF. Our strategy for reducing computational overhead is to leverage Newton-Raphson scoring, a classical optimization technique that we apply to the Cox partial likelihood function within each non-leaf node of the decision trees. We estimate the importance of individual predictors for an oblique RSF by negating each coefficient used for a given predictor in the linear combinations and then computing the resulting reduction in accuracy. In general benchmarking experiments, we found that our implementation of the oblique RSF is approximately 450 times faster than existing oblique RSF software, with higher Brier scores. In simulation studies, we found that "negation importance" distinguishes relevant from irrelevant predictors more reliably than permutation importance, Shapley additive explanations, and a previously introduced ANOVA-based technique for measuring variable importance in oblique RSFs. The methods introduced in the current study are available in the aorsf R package.
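The "negation importance" idea translates naturally into a short sketch: negate the coefficients of one predictor in every oblique split and measure the drop in out-of-bag accuracy. The sketch below assumes hypothetical `trees`, `non_leaf_nodes`, and `coefficients` attributes and a user-supplied scoring function; it is not the aorsf API.

```python
# Sketch of negation importance for an oblique random survival forest.
# The forest/tree/node interfaces and score_fn are assumed placeholders.
def negation_importance(forest, X, y, predictor_index, score_fn):
    """Return the drop in accuracy (e.g., C-statistic) after negating the
    coefficients of one predictor in every non-leaf node of the forest."""
    baseline = score_fn(forest, X, y)
    for tree in forest.trees:                      # assumed attribute
        for node in tree.non_leaf_nodes:           # assumed attribute
            node.coefficients[predictor_index] *= -1.0
    negated = score_fn(forest, X, y)
    # Restore the original coefficients before returning.
    for tree in forest.trees:
        for node in tree.non_leaf_nodes:
            node.coefficients[predictor_index] *= -1.0
    return baseline - negated
```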
Multi-task learning is useful in NLP because it is often desirable to have a single model that works across a range of tasks. In the medical domain, sequential training on tasks may sometimes be the only way to train models, either because access to the original (potentially sensitive) data is no longer available, or simply because of the computational costs inherent in joint retraining. A major problem inherent in sequential learning, however, is catastrophic forgetting: accuracy on previous tasks drops sharply when the model is updated for a new task. Elastic weight consolidation is a recently proposed method to address this problem, but scaling it to the modern large models used in practice requires strong independence assumptions about model parameters, limiting its effectiveness. In this work, we apply Kronecker factorization, a recent approach that relaxes the independence assumption, to prevent catastrophic forgetting in convolutional and Transformer-based neural networks at scale. We show the effectiveness of this technique on the important and illustrative task of medical entity linking across three datasets, demonstrating its ability to efficiently update existing methods as new medical data becomes available. On average, the proposed method reduces catastrophic forgetting by 51% when using a BERT-based model, compared to a 27% reduction using standard elastic weight consolidation, while maintaining space complexity proportional to the number of model parameters.
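To make the Kronecker-factored penalty concrete, the sketch below evaluates the elastic-weight-consolidation quadratic for a single layer with a Fisher approximation A ⊗ G, using the identity vec(dW)^T (A ⊗ G) vec(dW) = tr(dW^T G dW A); the factors here are illustrative stand-ins rather than Fisher estimates from real data.

```python
# Hedged sketch of a Kronecker-factored EWC penalty for one linear layer.
# A (in_dim x in_dim) and G (out_dim x out_dim) stand in for the
# activation- and gradient-covariance factors estimated after a task.
import torch

def kfac_ewc_penalty(weight, old_weight, A, G, lam=1.0):
    """0.5 * lam * vec(dW)^T (A kron G) vec(dW), computed without forming
    the full Kronecker product via the identity tr(dW^T G dW A)."""
    dW = weight - old_weight                      # (out_dim, in_dim)
    return 0.5 * lam * torch.trace(dW.t() @ G @ dW @ A)

# Example with identity factors (placeholders for estimated factors).
out_dim, in_dim = 4, 3
W_new, W_old = torch.randn(out_dim, in_dim), torch.randn(out_dim, in_dim)
A = torch.eye(in_dim)    # input-activation factor
G = torch.eye(out_dim)   # backpropagated-gradient factor
print(kfac_ewc_penalty(W_new, W_old, A, G))
```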
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
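Below is a minimal sketch of the kind of multi-task readout the benchmark is designed to probe: one shared encoder with an image-level head for brain-region prediction and a pixel-level head for microstructure segmentation. The architecture and class counts are illustrative guesses, not the benchmark's baseline models.

```python
# Shared encoder with two heads: an image-level region classifier and a
# per-pixel microstructure segmenter. Sizes are illustrative only.
import torch
import torch.nn as nn

class MultiTaskReadout(nn.Module):
    def __init__(self, num_regions=4, num_microstructure_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.region_head = nn.Sequential(            # image-level label
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_regions)
        )
        self.seg_head = nn.Conv2d(64, num_microstructure_classes, 1)  # per-pixel label

    def forward(self, x):
        feats = self.encoder(x)
        return self.region_head(feats), self.seg_head(feats)

# Example: a batch of two grayscale patches.
logits_region, logits_seg = MultiTaskReadout()(torch.randn(2, 1, 128, 128))
```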
The purpose of this work was to tackle practical issues which arise when using a tendon-driven robotic manipulator with a long, passive, flexible proximal section in medical applications. A separable robot which overcomes difficulties in actuation and sterilization is introduced, in which the body containing the electronics is reusable and the remainder is disposable. A control input which resolves the redundancy in the kinematics and a physical interpretation of this redundancy are provided. The effect of a static change in the proximal section angle on bending angle error was explored under four testing conditions for a sinusoidal input. Bending angle error increased with increasing proximal section angle for all testing conditions, with an average error reduction of 41.48% for re-tension, 4.28% for hysteresis, and 52.35% for re-tension + hysteresis compensation relative to the baseline case. Two major sources of error in tracking the bending angle were identified: time delay from hysteresis and DC offset from the proximal section angle. Examination of these error sources revealed that the simple hysteresis compensation was most effective for removing time delay and re-tension compensation for removing DC offset, which was the primary source of increasing error. The re-tension compensation was also tested for dynamic changes in the proximal section and reduced error in the final configuration of the tip by 89.14% relative to the baseline case.
Compliance in actuation has been exploited to generate highly dynamic maneuvers such as throwing that take advantage of the potential energy stored in joint springs. However, the storage and release of this energy could not yet be precisely timed. Moreover, for multi-link systems, the natural system dynamics might even work against the actual goal. With the introduction of variable stiffness actuators, this problem has been partially addressed. With a suitable optimal control strategy, the approximate decoupling of the motor from the link can be achieved to maximize the energy transfer into the distal link prior to launch. However, such continuous stiffness variation is complex and typically leads to oscillatory swing-up motions instead of clear launch sequences. To circumvent this issue, we investigate decoupling for speed maximization with a dedicated novel actuator concept denoted Bi-Stiffness Actuation. With this, it is possible to fully decouple the link from the joint mechanism by a switch-and-hold clutch and simultaneously keep the elastic energy stored. We show that with this novel paradigm, it is not only possible to reach the same optimal performance as with power-equivalent variable stiffness actuation, but even to directly control the energy transfer timing. This is a major step forward compared to previous optimal control approaches, which rely on optimizing the full time-series control input.
Previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes for different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in types, scale, pose, occlusion, and lighting conditions. Current object detectors such as YOLOv5 and Faster R-CNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of a combination of an existing detector and the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among the fine-grained datasets.
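The combined baseline described above (an existing detector followed by a hierarchical classifier) can be sketched as a two-stage pipeline; the torchvision detector below is a stand-in for the detectors evaluated in the paper, and the hierarchical classifier is a dummy stub rather than the HRN model.

```python
# Two-stage sketch: a generic detector proposes vehicle boxes, then a
# hierarchical classifier assigns a three-level fine-grained label per crop.
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def hierarchical_classifier(crop):
    """Placeholder for an HRN-style classifier mapping a crop to a
    (vehicle type, make, model) label; returns a dummy label here."""
    return ("two-wheeler", "unknown-make", "unknown-model")

def fine_grained_detect(image_tensor, score_threshold=0.5):
    """image_tensor: float tensor of shape (C, H, W) with values in [0, 1]."""
    with torch.no_grad():
        detections = detector([image_tensor])[0]
    labels = []
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < score_threshold:
            continue
        x1, y1, x2, y2 = box.int().tolist()
        crop = image_tensor[:, y1:y2, x1:x2]
        labels.append((box.tolist(), hierarchical_classifier(crop)))
    return labels
```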
The task of reconstructing 3D human motion has wide-ranging applications. Gold-standard motion capture (MoCap) systems are accurate but inaccessible to the general public due to their cost, hardware, and space constraints. In contrast, monocular human mesh recovery (HMR) methods are much more accessible than MoCap as they take single-view videos as inputs. Replacing multi-view MoCap systems with a monocular HMR method would break the current barriers to collecting accurate 3D motion, making exciting applications like motion analysis and motion-driven animation accessible to the general public. However, the performance of existing HMR methods degrades when the video contains challenging and dynamic motion that is not present in the MoCap datasets used for training. This reduces their appeal, as dynamic motion is frequently the target of 3D motion recovery in the aforementioned applications. Our study aims to bridge the gap between monocular HMR and multi-view MoCap systems by leveraging information shared across multiple video instances of the same action. We introduce the Neural Motion (NeMo) field, which is optimized to represent the underlying 3D motion across a set of videos of the same action. Empirically, we show that NeMo can recover 3D motion in sports using videos from the Penn Action dataset, where NeMo outperforms existing HMR methods in terms of 2D keypoint detection. To further validate NeMo using 3D metrics, we collected a small MoCap dataset mimicking actions in Penn Action, and show that NeMo achieves better 3D reconstruction compared to various baselines.
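A hedged sketch of the central idea: a shared neural motion field maps a normalized time (plus a per-video latent code) to a 3D pose and is fit so that its projections match the 2D keypoints in every video of the same action. The joint count, layer sizes, and latent code below are illustrative guesses, not the NeMo implementation.

```python
# Minimal neural "motion field": (phase t, per-video code) -> 3D joint positions.
# Fitting would minimize reprojection error against 2D keypoints per video.
import torch
import torch.nn as nn

class NeuralMotionField(nn.Module):
    def __init__(self, num_videos, num_joints=17, code_dim=8, hidden=256):
        super().__init__()
        self.num_joints = num_joints
        self.codes = nn.Embedding(num_videos, code_dim)  # per-video variation
        self.mlp = nn.Sequential(
            nn.Linear(1 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_joints * 3),
        )

    def forward(self, t, video_idx):
        """t: (batch,) normalized phase in [0, 1]; video_idx: (batch,) integer ids."""
        x = torch.cat([t.unsqueeze(-1), self.codes(video_idx)], dim=-1)
        return self.mlp(x).view(-1, self.num_joints, 3)

field = NeuralMotionField(num_videos=10)
poses = field(torch.rand(4), torch.tensor([0, 1, 2, 3]))  # shape (4, 17, 3)
```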